In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into the multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
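A minimal sketch of the implicit-alignment idea described above, with hypothetical module and variable names (the abstract does not give implementation details): sampled 3D points are encoded by a small MLP and added to each modality's tokens, so image and LiDAR tokens share one 3D positional reference frame before entering a transformer decoder.

```python
import torch
import torch.nn as nn

class CoordEncoder(nn.Module):
    """Encode 3D point coordinates and add them to modality tokens (illustrative)."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, tokens, points_3d):
        # tokens: (B, N, dim) modality tokens; points_3d: (B, N, 3) sampled 3D points
        return tokens + self.mlp(points_3d)

enc = CoordEncoder()
img_tokens = enc(torch.randn(2, 100, 256), torch.randn(2, 100, 3))  # image branch
pts_tokens = enc(torch.randn(2, 200, 256), torch.randn(2, 200, 3))  # LiDAR branch
# One token sequence for the decoder; alignment comes from the shared 3D encoding.
fused = torch.cat([img_tokens, pts_tokens], dim=1)
```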
It is well established in neuroscience that color vision plays an essential part in the human visual perception system. Meanwhile, many novel computer vision designs inspired by human vision have achieved success in a wide range of tasks and applications. Nonetheless, how color differences affect machine vision has not been well explored. Our work tries to bridge this gap between the human and machine sides of color vision in visual recognition. To this end, we curate two datasets, CIFAR10-F and CIFAR100-F, based on the foreground colors of the popular CIFAR datasets. Together with CIFAR10-B and CIFAR100-B, the existing counterpart datasets with background-color information for the CIFAR test sets, we assign each image a color contrast level based on its foreground and background color labels and use this as a proxy to study how color contrast affects machine vision. We first conduct a proof-of-concept study showing the effect of color differences and validating our datasets. Furthermore, on a broader level, an important characteristic of human vision is its robustness against ambient changes; therefore, drawing inspiration from ophthalmology and the robustness literature, we analogize contrast sensitivity from human vision to machine vision and complement the current robustness studies on corrupted images with our CIFAR-CoCo datasets. In summary, motivated by neuroscience and equipped with the datasets we curate, we devise a framework along two dimensions, (1) model architecture and (2) model size, to perform extensive analyses of the effect of color contrast and corrupted images, measuring the perception ability of machine vision beyond total accuracy. We also explore how task complexity and data augmentation play a role in this setup. Our results call attention to new evaluation approaches for human-like machine perception.
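As a hedged illustration only: the abstract does not specify how a contrast level is computed from the foreground and background color labels, so the sketch below uses the standard WCAG relative-luminance contrast ratio as one plausible proxy. The papers' actual assignment may differ.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 integers."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg_rgb, bg_rgb):
    """Contrast ratio between a labeled foreground and background color (1 to 21)."""
    l1, l2 = sorted((relative_luminance(fg_rgb), relative_luminance(bg_rgb)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast_ratio((255, 255, 255), (0, 0, 0)))  # 21.0, maximal contrast
```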
AI-based protein structure prediction pipelines, such as AlphaFold2, have achieved near-experimental accuracy. These advanced pipelines mainly rely on multiple sequence alignments (MSAs) and templates as inputs to learn co-evolution information from homologous sequences. However, searching for MSAs and templates in protein databases is time-consuming, usually taking tens of minutes. We therefore attempt to explore the limits of fast protein structure prediction using only a protein's primary sequence. HelixFold-Single is proposed to combine a large-scale protein language model with the superior geometric learning capability of AlphaFold2. Our method first pre-trains a large-scale protein language model (PLM) on massive numbers of primary sequences via a self-supervised learning paradigm; the PLM then serves as an alternative to MSAs and templates for capturing co-evolution information. Next, by combining the pre-trained PLM with the essential components of AlphaFold2, we obtain an end-to-end differentiable model that predicts the 3D coordinates of atoms from the primary sequence alone. HelixFold-Single is validated on the CASP14 and CAMEO datasets, achieving accuracy competitive with MSA-based methods on targets with large homologous families. Furthermore, HelixFold-Single consumes much less time than mainstream protein structure prediction pipelines, demonstrating its potential in tasks that require many predictions. The code of HelixFold-Single is available at https://github.com/PaddlePaddle/PaddleHelix/tree/dev/apps/protein_folding/helixfold-single, and we also provide a stable web service at https://paddlehelix.baidu.com/app/drug/protein-single/forecast.
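A schematic toy model of the pipeline shape the abstract describes, not HelixFold-Single's actual architecture: a stand-in PLM encodes the primary sequence, an adapter maps its features into a geometry trunk standing in for the AlphaFold2 components, and a head emits per-residue 3D coordinates, all differentiable end to end.

```python
import torch
import torch.nn as nn

class PLMFoldSketch(nn.Module):
    def __init__(self, vocab=25, plm_dim=512, trunk_dim=384):
        super().__init__()
        self.embed = nn.Embedding(vocab, plm_dim)
        # Stand-in for the large pre-trained PLM (self-supervised on sequences).
        self.plm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(plm_dim, nhead=8, batch_first=True), num_layers=2)
        self.adapter = nn.Linear(plm_dim, trunk_dim)
        # Stand-in for the AlphaFold2 geometry components.
        self.trunk = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(trunk_dim, nhead=8, batch_first=True), num_layers=2)
        self.coords = nn.Linear(trunk_dim, 3)  # per-residue 3D coordinates (CA only here)

    def forward(self, seq_tokens):            # (B, L) integer residue tokens
        h = self.plm(self.embed(seq_tokens))  # PLM features replace MSA/template inputs
        h = self.trunk(self.adapter(h))
        return self.coords(h)                 # (B, L, 3), differentiable end to end

xyz = PLMFoldSketch()(torch.randint(0, 25, (1, 64)))
```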
Accurate protein structure prediction can significantly accelerate the development of life science. The accuracy of AlphaFold2, a frontier end-to-end structure prediction system, is already close to that of experimental determination techniques. Due to its complex model architecture and large memory consumption, training and running inference with AlphaFold2 from scratch require substantial computational resources and time. The cost of running the original AlphaFold2 is prohibitive for most individuals and institutions, so reducing this cost could accelerate the development of life science. We implement AlphaFold2 with PaddlePaddle, namely HelixFold, to improve training and inference speed and reduce memory consumption. Operator fusion, tensor fusion, and hybrid parallelism improve performance, while memory is optimized through recomputation, BFloat16, and in-place memory reads/writes. Compared with the original AlphaFold2 (implemented in JAX) and OpenFold (implemented in PyTorch), HelixFold needs only 7.5 days to complete full end-to-end training, and only 5.3 days when using hybrid parallelism, whereas AlphaFold2 and OpenFold both take about 11 days; HelixFold thus cuts the training time roughly in half. We verify that HelixFold's accuracy is comparable to AlphaFold2 on the CASP14 and CAMEO datasets. HelixFold's code is freely available at https://github.com/PaddlePaddle/PaddleHelix/tree/dev/apps/protein_folding/helixfold, and we also provide a stable web service at https://paddlehelix.baidu.com/app/drug/protein/forecast.
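A minimal PyTorch sketch of two of the memory optimizations named above, recomputation (gradient checkpointing) and BFloat16; HelixFold itself implements these in PaddlePaddle, alongside operator/tensor fusion and hybrid parallelism, which this toy omits.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
x = torch.randn(8, 1024, requires_grad=True)

# BFloat16 autocast halves activation memory vs. FP32 for most matmul-heavy ops.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    # Activations inside `block` are not stored; they are recomputed in backward.
    y = checkpoint(block, x, use_reentrant=False)

y.float().sum().backward()
```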
In this paper, we propose PETRv2, a unified framework for 3D perception from multi-view images. Based on PETR, PETRv2 explores the effectiveness of temporal modeling, which utilizes the temporal information of previous frames to boost 3D object detection. More specifically, we extend the 3D position embedding (3D PE) in PETR for temporal modeling: the 3D PE achieves temporal alignment of object positions across different frames. A feature-guided position encoder is further introduced to improve the data adaptability of the 3D PE. To support high-quality BEV segmentation, PETRv2 provides a simple yet effective solution by adding a set of segmentation queries, each responsible for segmenting one specific patch of the BEV map. PETRv2 achieves state-of-the-art performance on 3D object detection and BEV segmentation. A detailed robustness analysis is also conducted on the PETR framework. We hope PETRv2 can serve as a strong baseline for 3D perception. Code is available at \url{https://github.com/megvii-research/PETR}.
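A hedged sketch of the temporal-alignment idea: 3D points generated for the previous frame are warped into the current ego coordinate system before the position embedding is computed, so the 3D PE of both frames refers to the same space. Names and shapes are illustrative assumptions, not PETRv2's actual code.

```python
import torch

def align_points(points_prev, T_prev_to_cur):
    # points_prev: (N, 3) 3D points in the previous ego frame
    # T_prev_to_cur: (4, 4) ego-pose transform from previous to current frame
    homo = torch.cat([points_prev, torch.ones(points_prev.shape[0], 1)], dim=1)  # (N, 4)
    return (homo @ T_prev_to_cur.T)[:, :3]  # (N, 3) in the current ego frame

aligned = align_points(torch.randn(1000, 3), torch.eye(4))
# The aligned points would then be encoded exactly like current-frame points,
# e.g. pe_prev = pos_mlp(aligned), yielding temporally consistent 3D PE.
```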
In this paper, we develop position embedding transformation (PETR) for multi-view 3D object detection. PETR encodes the position information of 3D coordinates into image features, producing 3D position-aware features. Object queries can perceive the 3D position-aware features and perform end-to-end object detection. PETR achieves state-of-the-art performance (50.4% NDS and 44.1% mAP) on the standard nuScenes dataset and ranks first on the benchmark. It can serve as a simple yet strong baseline for future research. Code is available at \url{https://github.com/megvii-research/PETR}.
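A simplified sketch, with illustrative names and shapes, of how 3D coordinates can be encoded into image features: each pixel's camera-frustum points are lifted to 3D and passed through an MLP, and the result is added to the 2D image features to make them 3D position-aware.

```python
import torch
import torch.nn as nn

D = 64  # depth bins per pixel along the camera ray
pos_mlp = nn.Sequential(nn.Linear(3 * D, 256), nn.ReLU(), nn.Linear(256, 256))

def position_embedding(frustum_points, img_feats):
    # frustum_points: (H*W, D, 3) lifted 3D points per pixel; img_feats: (H*W, 256)
    pe = pos_mlp(frustum_points.flatten(1))  # (H*W, 256) 3D position embedding
    return img_feats + pe                    # 3D position-aware features

feats = position_embedding(torch.randn(32 * 32, D, 3), torch.randn(32 * 32, 256))
```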
Recently, neural techniques have been used to generate source code automatically. While promising on declarative languages, these approaches perform much worse on datasets of imperative languages. Since declarative languages are typically embedded in imperative languages in real-world software development (i.e., turducken-style programming), the promising results on declarative languages can hardly lead to a substantial reduction of manual software development effort. In this paper, we define a new code generation task: given a natural language comment, generate a program in a base imperative language with an embedded declarative language. To the best of our knowledge, this is the first turducken-style code generation task. For this task, we present Lyra: a dataset of Python with embedded SQL. The dataset contains 2,000 carefully annotated database manipulation programs from real-world projects, each paired with both a Chinese comment and an English comment. In our experiments, we adopt Transformer, BERT-style, and GPT-style models as baselines. In the best setting, GPT-style models outperform the others, with AST exact matching accuracy of 24% and 25.5% when using Chinese and English comments, respectively. We therefore believe that Lyra poses a new challenge for code generation; overcoming it could significantly boost the applicability of code generation techniques in real-world software development.
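A hypothetical example (not drawn from Lyra) of the kind of target program the task describes: imperative Python with an embedded declarative SQL query, of the sort a model would have to generate from a comment like "Return the names of users who registered after the given date."

```python
import sqlite3

def users_registered_after(db_path, date):
    """Return names of users registered after `date`, newest last."""
    conn = sqlite3.connect(db_path)
    cur = conn.execute(
        # The declarative SQL "inside" the imperative Python "outside".
        "SELECT name FROM users WHERE register_date > ? ORDER BY register_date",
        (date,),
    )
    rows = [r[0] for r in cur.fetchall()]
    conn.close()
    return rows
```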
Existing graph neural networks (GNNs) rely heavily on node embeddings, which represent a node as a vector based on its identity, type, or content. However, graphs with unattributed nodes are widespread in real-world applications (e.g., anonymized social networks). Previous GNNs either assign random labels to nodes (which introduces artifacts into the GNN) or assign one embedding to all nodes (which fails to explicitly distinguish one node from another). Moreover, when applied to unattributed node classification problems, these GNNs exhibit an undesired equivariance property, which makes them fundamentally unable to handle data with multiple possible outputs. In this paper, we analyze the limitations of existing approaches to this node classification problem. Inspired by our analysis, we propose a generalized equivariance property and a preferential labeling technique that satisfies the desired property asymptotically. Experimental results show that we achieve high performance on several unattributed node classification tasks.
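A small sketch, with illustrative names, of the two baseline strategies the abstract criticizes; the proposed preferential labeling technique itself is not specified in the abstract and is not reproduced here.

```python
import torch

n_nodes, dim = 5, 8

# Strategy 1: random labels/features per node. This introduces artifacts:
# a fresh random draw can change the prediction for the very same graph.
x_random = torch.randn(n_nodes, dim)

# Strategy 2: one shared embedding for all nodes. Every node starts identical,
# so a message-passing GNN can only separate nodes via graph structure and
# cannot explicitly distinguish one node from another.
shared = torch.nn.Parameter(torch.randn(dim))
x_shared = shared.expand(n_nodes, dim)
```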
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous work on KG and CKG completion suffers from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to tackle the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing work in terms of the type of KG and the methods employed. Finally, we present applications of FKGC models to prediction tasks in different areas and share our thoughts on future research directions for FKGC.
Few-shot instance segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers to re-weight query features more appropriately. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level: we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After these steps, performance on the novel classes improves significantly over our strong baseline. Additionally, the framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance across different shot settings; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30 shots. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be made available.
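A hedged sketch of the first insight only: support masks pool support features into per-class centers, which then re-weight the query feature map. The function names and the exact weighting rule are illustrative assumptions, not RefT's actual design.

```python
import torch
import torch.nn.functional as F

def dynamic_class_centers(support_feats, support_masks):
    # support_feats: (K, C, H, W); support_masks: (K, 1, H, W) binary masks
    masked = support_feats * support_masks
    centers = masked.sum(dim=(2, 3)) / support_masks.sum(dim=(2, 3)).clamp(min=1)
    return centers  # (K, C): one masked-average-pooled center per support image

def reweight_query(query_feats, centers):
    # query_feats: (C, H, W); modulate by cosine similarity to the class center
    center = F.normalize(centers.mean(0), dim=0)            # (C,)
    q = F.normalize(query_feats, dim=0)                     # (C, H, W)
    sim = (q * center[:, None, None]).sum(0, keepdim=True)  # (1, H, W)
    return query_feats * (1 + sim)  # feature-level enhancement of the query

feats = reweight_query(torch.randn(256, 32, 32),
                       dynamic_class_centers(torch.randn(3, 256, 32, 32),
                                             torch.randint(0, 2, (3, 1, 32, 32)).float()))
```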